16 research outputs found

    Privacy protection and energy optimization for 5G-aided industrial internet of things

    5G is expected to revolutionize every sector of life by providing high-speed interconnectivity of everything, everywhere. However, massively interconnected devices and fast data transmission bring challenges of privacy as well as energy deficiency. In today's fast-paced economy, almost every sector depends on energy resources, while the energy sector itself relies mainly on fossil fuels, which constitute about 80% of global energy supply; the massive extraction and combustion of fossil fuels have adverse impacts on health, the environment, and the economy. 5G-enabled IIoT devices have transformed traditional systems into smart ones, e.g., smart cities, smart healthcare, smart industry, and smart manufacturing. However, the massive I/O technologies that provide device-to-device (D2D) connectivity also raise privacy issues that need to be addressed. Privacy is a fundamental right of every individual, and 5G industries and organizations must preserve it for their stability and competitiveness; it therefore needs to be maintained at all three levels: data, identity, and location. Further, energy optimization is a major challenge in leveraging the potential benefits of 5G and 5G-aided IIoT: the billions of IIoT devices expected to communicate over 5G networks will consume a considerable amount of energy while energy resources remain limited. To fill these gaps, we provide a comprehensive framework that helps energy researchers and practitioners better understand 5G-aided Industry 4.0 infrastructure and optimize energy resources by improving privacy. The proposed framework is evaluated using case studies and mathematical modelling.
© 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved

    An Intelligent Medical Imaging Approach for Various Blood Structure Classifications

    Blood is a vital body fluid and can be instrumental in identifying various pathological conditions. Nowadays, many people are suffering from COVID-19, and every country has limited testing capacity. Consequently, a system is required to help doctors analyze a patient's blood structure, including COVID-19 indicators. Therefore, in this paper, we extract and select blood features with a new feature extraction and selection method named stepwise linear discriminant analysis (SWLDA). SWLDA picks localized features from blood-structure images and discerns their class based on a regression value such as the partial F value. SWLDA begins with an equation comprising the single best X variable and then attempts to add further Xs one at a time, provided the conditions are adequate. Adding and picking are driven by the F value, which determines which variable is entered: the chosen (or default) F-to-enter value is compared with the largest partial F value. Forward addition or backward removal then begins, and the partial test values of all predictor variables already in the equation are estimated. The lowest partial test value (FL) is compared with a preselected or default significance level F0: if F0 > FL, the variable ZL is removed and the F-test starts again; otherwise, the regression equation is adopted. Finally, the system is trained with a support vector machine (SVM) to label the blood images. The performance of the proposed approach is assessed on 8 different datasets of blood structures, and the results confirm that the proposed method achieves significant accuracy on different blood-structure images, including COVID-19
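The forward-addition step driven by a partial F-to-enter threshold can be sketched as follows. This is a generic stepwise-regression illustration, not the authors' exact SWLDA procedure; the function names and the threshold value are illustrative:

```python
import numpy as np

def rss(X, y):
    # Residual sum of squares of a least-squares fit of y on X (with intercept).
    A = np.column_stack([np.ones(len(y)), X]) if X.shape[1] else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

def forward_stepwise(X, y, f_enter=10.0):
    # Add predictors one at a time while the best candidate's partial F
    # statistic exceeds the F-to-enter threshold.
    n, m = X.shape
    selected = []
    while True:
        base = rss(X[:, selected], y)
        best_f, best_j = -np.inf, None
        for j in range(m):
            if j in selected:
                continue
            full = rss(X[:, selected + [j]], y)
            p_full = len(selected) + 2          # intercept + predictors
            f_j = (base - full) / (full / (n - p_full))
            if f_j > best_f:
                best_f, best_j = f_j, j
        if best_j is None or best_f < f_enter:
            break
        selected.append(best_j)
    return selected
```

With synthetic data whose target depends on only two of five columns, the procedure selects exactly those columns and stops before admitting the noise variables.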

    Formulating Enhancement and Restoration Strategy to Improve the Quality of Dusty Images


    RTF-RCNN: An Architecture for Real-Time Tomato Plant Leaf Diseases Detection in Video Streaming Using Faster-RCNN

    Vegetables are an important part of many foods, and among vegetable crops, tomatoes are the most popular and appear in almost every kind of food item. Like many other crops, tomato plants are affected by various diseases during their growing season; if cultivators do not apply control measures, 40–60% of a field crop may be damaged by leaf diseases, causing great losses in tomato production. A proper mechanism is therefore needed to detect these problems. Researchers have proposed different techniques for detecting plant diseases, including support vector machines, artificial neural networks, and Convolutional Neural Network (CNN) models; earlier work relied on benchmark feature-extraction techniques. In this study, we propose a real-time faster region-based convolutional neural network (RTF-RCNN) model for detecting tomato plant leaf diseases, using both images and real-time video streaming. We evaluate RTF-RCNN with precision, accuracy, and recall, comparing it against AlexNet and CNN models. The final results show that the accuracy of the proposed RTF-RCNN is 97.42%, higher than that of AlexNet (96.32%) and the CNN model (92.21%)
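The three evaluation measures named above reduce to simple counts over true/predicted labels. A minimal sketch for a binary diseased-vs-healthy labelling (the function name is illustrative, not from the paper):

```python
def detection_metrics(y_true, y_pred):
    # Confusion-matrix counts for binary labels (1 = diseased, 0 = healthy).
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy
```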

    ENHANCEMENT AND RESTORATION OF DUST IMAGES

    This dissertation focuses on modeling dust noise on ordinary images and formulating enhancement and restoration strategies to improve the quality of dusty images using a sequence of image-processing steps. Analyses of images acquired in dusty environments show that they tend to have noise, blur, a small dynamic range, low contrast, diminished blue components, and elevated red components. The proposed dust-noise model consists of three steps. First, the atmospheric turbulence-blurring model of Hufnagel and Stanley is used to generate blur noise. Second, gamma correction adjusts image brightness; the gamma parameter is calculated directly from the color components without prior knowledge. Third, a statistical method simulates the color contrasts of real dust images. The noise model is designed from a statistical analysis of real dust-image histograms; it is adaptive, extracting the necessary parameters directly from the original images without prior knowledge and degrading each color component independently. As a result, the dust-noise model can simulate real dust images. The second contribution of this dissertation is an automatic color-correction algorithm (D) that improves the quality of dusty images. This algorithm is based on the Wiener filter, luminance stretching, and a modified homomorphic filter. The Wiener filter is applied to restore image pixels and remove blur noise. The image is then converted from RGB to the YCbCr color model, where the luminance is stretched to adjust the illumination automatically. Back in the RGB color model, the modified homomorphic filter enhances image contrast to recover true colors.
The third contribution is a statistical adaptive algorithm (S) that consists of the following: the Wiener filter restores image pixels, and contrast-stretching methods improve image contrast. Every pixel value in the red, green, and blue components is stretched between the smallest and largest values using the same scaling function, preserving correct contrast. The image is then converted into the HSI color model to stretch the intensity, which is adjusted to recover true colors and illumination. Finally, a color-balance approach removes the colorcast. Enhancement experiments are conducted on real and simulated dusty images, and both algorithms are evaluated with four well-known methods: human perception, root mean square error, peak signal-to-noise ratio, and the structural similarity index, ensuring a thorough evaluation. The results show that the introduced algorithms are effective in enhancing dusty images and superior to histogram equalization, gray-world, and white-patch algorithms. In addition, their computational complexity is very low, making them attractive for real-time image processing
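The per-channel contrast stretching and gamma adjustment described above can be sketched in a few lines. This is a simplified illustration, not the dissertation's algorithms: the Wiener filtering, HSI conversion, and color-balance stages are omitted, and the adaptive gamma here uses a common mean-brightness heuristic rather than the author's formula:

```python
import numpy as np

def stretch_channel(c):
    # Linearly stretch one color channel to the full [0, 1] range.
    lo, hi = c.min(), c.max()
    return (c - lo) / (hi - lo) if hi > lo else np.zeros_like(c)

def enhance_dusty(img, gamma=None):
    # img: float array in [0, 1], shape (H, W, 3), R/G/B channels.
    out = np.stack([stretch_channel(img[..., k]) for k in range(3)], axis=-1)
    if gamma is None:
        # Assumed heuristic: choose gamma so the mean brightness maps to 0.5.
        mean = out.mean()
        gamma = np.log(0.5) / np.log(mean) if 0 < mean < 1 else 1.0
    return np.clip(out ** gamma, 0.0, 1.0)
```

Because each channel is stretched with the same kind of scaling function, relative contrast within a channel is preserved while the diminished blue component regains its full dynamic range.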

    Automated Breast Cancer Detection Models Based on Transfer Learning

    Breast cancer is among the leading causes of mortality for women worldwide, so developing early detection and diagnosis techniques is essential for women's well-being. In mammography, attention has turned to deep learning (DL) models, which radiologists use to enhance their workflow and overcome the shortcomings of human observers. Transfer learning is used to distinguish malignant from benign breast cancer by fine-tuning multiple pre-trained models. In this study, we introduce a framework based on the principle of transfer learning. In addition, a mixture of augmentation strategies, including several rotation combinations, scaling, and shifting, is used to increase the number of mammographic images, preventing overfitting and producing stable outcomes. On the Mammographic Image Analysis Society (MIAS) dataset, the proposed system achieved an accuracy of 89.5% using ResNet50 (residual network-50) and 70% using the NASNet-Mobile network. The results demonstrate that pre-trained classification networks are significantly more effective and efficient, making them well suited to medical imaging, particularly with small training datasets
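Rotation and shifting augmentations of the kind mentioned above can be sketched with plain array operations. This is an illustrative generator, not the paper's exact pipeline; scaling is omitted here because it requires interpolation:

```python
import numpy as np

def augment(img):
    # Yield label-preserving variants of one mammogram patch: the four
    # 90-degree rotations, plus small pixel shifts of each rotation.
    for k in range(4):
        rot = np.rot90(img, k)
        yield rot
        for dy, dx in [(-2, 0), (2, 0), (0, -2), (0, 2)]:
            yield np.roll(rot, shift=(dy, dx), axis=(0, 1))
```

Each input patch yields 20 variants (4 rotations x 5 shift offsets including the unshifted one), multiplying the effective size of a small training set.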

    COVID-19 Diagnosis Using an Enhanced Inception-ResNetV2 Deep Learning Model in CXR Images

    The COVID-19 pandemic has had a significant negative effect on people's health as well as on the world's economy. Polymerase chain reaction (PCR) is one of the main tests used to detect COVID-19 infection, but it is expensive, time-consuming, and lacks sufficient accuracy. In recent years, convolutional neural networks have attracted many researchers in the machine learning field due to their high diagnostic accuracy, especially in medical image recognition. Many architectures, such as Inception, ResNet, DenseNet, and VGG16, have been proposed and achieve excellent performance at low computational cost. Moreover, to accelerate the training of these traditional architectures, residual connections have been combined with the Inception architecture, yielding hybrid architectures such as Inception-ResNetV2. This paper proposes an enhanced Inception-ResNetV2 deep learning model that diagnoses chest X-ray (CXR) scans with high accuracy. In addition, the Grad-CAM algorithm is used to visualize the infected regions of the lungs in CXR images. Compared with state-of-the-art methods, the proposed model is superior in terms of accuracy, recall, precision, and F1-measure
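The core of the Grad-CAM computation is framework-independent: the gradients of the class score with respect to the last convolutional layer are average-pooled into per-channel weights, the feature maps are combined with those weights, and a ReLU keeps only the positively contributing regions. A minimal numpy sketch of that step (obtaining the feature maps and gradients from the network is assumed done elsewhere):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps, gradients: shape (K, H, W) — last-conv-layer activations
    # and the gradient of the target class score w.r.t. them.
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients
    cam = np.einsum('k,khw->hw', weights, feature_maps)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam
```

The resulting map is upsampled to the input resolution and overlaid on the CXR image to highlight the regions driving the prediction.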

    A robust clustering algorithm using spatial fuzzy C-means for brain MR images

    Magnetic Resonance Imaging (MRI) is a medical imaging modality commonly employed for the analysis of different diseases. However, MR images suffer from several problems, such as noise and other imaging artifacts added during the acquisition process, which make segmentation genuinely challenging. In medical images, the well-known Fuzzy C-Means (FCM) clustering approach is widely used for segmentation. FCM is fast on noise-free images; however, it does not consider the spatial context of the image, so its performance suffers when images are corrupted with noise and other imaging artifacts. In this paper, a weighted spatial Fuzzy C-Means (wsFCM) segmentation method is proposed that considers the spatial information of the image. A spatial function is developed and integrated into the membership function: a neighborhood window is established around each pixel, and greater weights are assigned to those neighboring pixels that have greater correlation with the central pixel. The modified membership function strengthens the original one in handling noise and intensity inhomogeneity while preserving structural information such as edges. A comprehensive set of experiments is performed on publicly accessible simulated and real standard brain MRI datasets, and the performance of the proposed method is compared with existing state-of-the-art methods. The results show that the proposed method is more accurate and more robust to noise and intensity inhomogeneity than existing works. Keywords: Clustering algorithm, MRI, Fuzzy C-Means
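The idea of folding a neighborhood term into the FCM membership update can be sketched as follows. This is a simplified spatial-FCM variant for intensity images: the spatial function here is an unweighted 3x3 membership sum rather than the paper's correlation-weighted window, and the parameter names are illustrative:

```python
import numpy as np

def spatial_fcm(img, c=2, m=2.0, p=1.0, q=1.0, iters=30):
    # img: 2-D intensity image. u holds memberships of shape (c, H, W).
    centers = np.linspace(img.min(), img.max(), c)
    u = None
    for _ in range(iters):
        # Standard FCM membership from distances to cluster centers.
        d = np.abs(img[None] - centers[:, None, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)
        # Spatial function h: membership mass in each pixel's 3x3 window.
        h = sum(np.roll(np.roll(u, dy, 1), dx, 2)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        # Modified membership combines pixel-wise and spatial evidence.
        u = (u ** p) * (h ** q)
        u /= u.sum(axis=0, keepdims=True)
        # Update centers with fuzzified memberships.
        um = u ** m
        centers = (um * img[None]).sum(axis=(1, 2)) / um.sum(axis=(1, 2))
    return u.argmax(axis=0), centers
```

On a noisy two-region image, the spatial term suppresses isolated misclassified pixels that plain FCM would leave scattered inside each region.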

    FAIR Health Informatics: A Health Informatics Framework for Verifiable and Explainable Data Analysis

    The recent COVID-19 pandemic has hit humanity very hard in ways rarely observed before. In this digitally connected world, the health informatics and investigation domains (both public and private) lack a robust framework to enable rapid investigation and cures. Since data in the healthcare domain are highly confidential, any framework must work on real data, be verifiable, and support reproducibility for evidence purposes. In this paper, we propose a health informatics framework that supports real-time data acquisition from various sources, correlates these data with one another and with domain-specific terminologies, and supports querying and analysis. The sources include sensory data from wearable sensors, clinical investigation data (for trials and devices) from private and public agencies, personal health records, academic publications in the healthcare domain, and semantic information such as clinical ontologies and the Medical Subject Headings (MeSH) ontology. The linking and correlation of these sources includes mapping personal wearable data to health records, clinical oncology terms to clinical trials, and so on. The framework is designed so that the data are Findable, Accessible, Interoperable, and Reusable (FAIR), with proper identity and access mechanisms. In practice, this means tracing and linking each step in the data-management lifecycle through discovery, ease of access and exchange, and data reuse. We present a practical use case that correlates various aspects of data relating to a certain medical subject heading from the MeSH ontology, along with academic publications, with clinical investigation data. The proposed architecture supports streaming data acquisition and the servicing and processing of changes throughout the data-management lifecycle.
This is necessary in certain events, such as when the status of a clinical or other health-related investigation needs to be updated; in such cases, the outline of those events must be tracked and viewed for the analysis and traceability of the clinical investigation and to define interventions if necessary

    Estimation of Organizational Competitiveness by a Hybrid of One-Dimensional Convolutional Neural Networks and Self-Organizing Maps Using Physiological Signals for Emotional Analysis of Employees

    The theory of modern organizations considers emotional intelligence the metric for tools that enable organizations to create a competitive vision; it also helps corporate leaders adhere enthusiastically to the vision and energize organizational stakeholders to accomplish it. In this study, a one-dimensional convolutional neural network classification model is first employed to interpret and evaluate shifts in emotion over time by categorizing the emotional states that occur at particular moments during mutual interaction, using physiological signals. The self-organizing map technique is then applied to cluster overall organizational emotions as a representation of organizational competitiveness. Analysis-of-variance test results indicate no significant difference in age or body mass index across participants exhibiting different emotions; however, significant mean differences were observed for blood volume pulse, galvanic skin response, skin temperature, valence, and arousal, indicating the effectiveness of the chosen physiological sensors and their measures for analyzing emotions related to organizational competitiveness. The proposed technique achieved 99.8% classification accuracy for emotions. The study precisely identifies emotions and establishes a connection between emotional intelligence and organizational competitiveness (i.e., a positive relationship with employees augments organizational competitiveness)